virtual marker


Mesh-Gait: A Unified Framework for Gait Recognition Through Multi-Modal Representation Learning from 2D Silhouettes

Wang, Zhao-Yang, Chen, Jieneng, Liu, Jiang, Guo, Yuxiang, Chellappa, Rama

arXiv.org Artificial Intelligence

Gait recognition, a fundamental biometric technology, leverages unique walking patterns for individual identification, typically using 2D representations such as silhouettes or skeletons. However, these methods often struggle with viewpoint variations, occlusions, and noise. Multi-modal approaches that incorporate 3D body shape information offer improved robustness but are computationally expensive, limiting their feasibility for real-time applications. To address these challenges, we introduce Mesh-Gait, a novel end-to-end multi-modal gait recognition framework that directly reconstructs 3D representations from 2D silhouettes, effectively combining the strengths of both modalities. In existing methods, learning 3D features directly from 3D joints or meshes is complex, and the resulting features are difficult to fuse with silhouette-based gait features. To overcome this, Mesh-Gait reconstructs 3D heatmaps as an intermediate representation, enabling the model to effectively capture 3D geometric information while maintaining simplicity and computational efficiency. During training, the intermediate 3D heatmaps are gradually reconstructed and become increasingly accurate under supervised learning, where the loss is calculated between the reconstructed 3D joints, virtual markers, and 3D meshes and their corresponding ground truth, ensuring precise spatial alignment and consistent 3D structure. Mesh-Gait extracts discriminative features from both silhouettes and reconstructed 3D heatmaps in a computationally efficient manner. This design enables the model to capture spatial and structural gait characteristics while avoiding the heavy overhead of direct 3D reconstruction from RGB videos, allowing the network to focus on motion dynamics rather than irrelevant visual details. Extensive experiments demonstrate that Mesh-Gait achieves state-of-the-art accuracy. The code will be released upon acceptance of the paper.
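
The supervision described above compares reconstructed 3D joints, virtual markers, and mesh vertices against their ground truth. A minimal sketch of such a combined loss follows, assuming batched (B, N, 3) tensors; the function name, the L1 distance, and the equal default weights are illustrative assumptions, not the paper's specification.

```python
import torch.nn.functional as F

def mesh_gait_supervision_loss(pred_joints, gt_joints,
                               pred_markers, gt_markers,
                               pred_verts, gt_verts,
                               w_joint=1.0, w_marker=1.0, w_mesh=1.0):
    """Combined 3D supervision over joints, virtual markers, and mesh
    vertices (hypothetical weighting; the paper does not give one)."""
    loss_joint = F.l1_loss(pred_joints, gt_joints)    # (B, J, 3) vs GT
    loss_marker = F.l1_loss(pred_markers, gt_markers) # (B, M, 3) vs GT
    loss_mesh = F.l1_loss(pred_verts, gt_verts)       # (B, V, 3) vs GT
    return w_joint * loss_joint + w_marker * loss_marker + w_mesh * loss_mesh
```

In practice the three terms would be balanced against the recognition loss; the relative weights here are placeholders.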


BioPose: Biomechanically-accurate 3D Pose Estimation from Monocular Videos

Koleini, Farnoosh, Saleem, Muhammad Usama, Wang, Pu, Xue, Hongfei, Helmy, Ahmed, Fenwick, Abbey

arXiv.org Artificial Intelligence

Recent advancements in 3D human pose estimation from single-camera images and videos have relied on parametric models, like SMPL. However, these models oversimplify anatomical structures, limiting their accuracy in capturing true joint locations and movements, which reduces their applicability in biomechanics, healthcare, and robotics. Biomechanically accurate pose estimation, on the other hand, typically requires costly marker-based motion capture systems and optimization techniques in specialized labs. To bridge this gap, we propose BioPose, a novel learning-based framework for predicting biomechanically accurate 3D human pose directly from monocular videos. BioPose includes three key components: a Multi-Query Human Mesh Recovery model (MQ-HMR), a Neural Inverse Kinematics (NeurIK) model, and a 2D-informed pose refinement technique. MQ-HMR leverages a multi-query deformable transformer to extract multi-scale fine-grained image features, enabling precise human mesh recovery. NeurIK treats the mesh vertices as virtual markers, applying a spatial-temporal network to regress biomechanically accurate 3D poses under anatomical constraints. To further improve 3D pose estimations, a 2D-informed refinement step optimizes the query tokens during inference by aligning the 3D structure with 2D pose observations. Experiments on benchmark datasets demonstrate that BioPose significantly outperforms state-of-the-art methods. Project website: https://m-usamasaleem.github.io/publication/BioPose/BioPose.html
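
The 2D-informed refinement step optimizes the query tokens at inference so that the projected 3D pose matches the 2D observations. Below is a minimal sketch under stated assumptions: `decode_fn` and `project_fn` are hypothetical stand-ins for the model's mesh-recovery head and camera projection, and the gradient-descent loop with an MSE objective is an illustrative choice, not BioPose's published procedure.

```python
import torch
import torch.nn.functional as F

def refine_with_2d(query_tokens, decode_fn, project_fn, kpts_2d,
                   steps=50, lr=1e-2):
    """Test-time refinement sketch: adjust the query tokens so that the
    decoded 3D joints, projected into the image, align with observed 2D
    keypoints. Both callables must be differentiable."""
    tokens = query_tokens.clone().requires_grad_(True)
    opt = torch.optim.Adam([tokens], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        joints_3d = decode_fn(tokens)   # (J, 3) decoded pose
        reproj = project_fn(joints_3d)  # (J, 2) image-plane projection
        loss = F.mse_loss(reproj, kpts_2d)
        loss.backward()
        opt.step()
    return tokens.detach()
```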


Object Augmentation Algorithm: Computing virtual object motion and object-induced interaction wrench from optical markers

Herneth, Christopher, Li, Junnan, Fatoni, Muhammad Hilman, Ganguly, Amartya, Haddadin, Sami

arXiv.org Artificial Intelligence

This study addresses the critical need for diverse and comprehensive data focused on human arm joint torques while performing activities of daily living (ADL). Previous studies have often overlooked the influence of objects on joint torques during ADL, resulting in limited datasets for analysis. To address this gap, we propose an Object Augmentation Algorithm (OAA) capable of augmenting existing marker-based databases with virtual object motions and object-induced joint torque estimations. The OAA consists of five phases: (1) computing hand coordinate systems from optical markers, (2) characterising object movements with virtual markers, (3) calculating object motions through inverse kinematics (IK), (4) determining the wrench necessary for prescribed object motion using inverse dynamics (ID), and (5) computing joint torques resulting from object manipulation. The algorithm's accuracy is validated through trajectory tracking and torque analysis on a 7+4 degree of freedom (DoF) robotic hand-arm system, manipulating three unique objects. The results show that the OAA can accurately and precisely estimate 6 DoF object motion and object-induced joint torques. Correlations between computed and measured quantities were > 0.99 for object trajectories and > 0.93 for joint torques. The OAA was further shown to be robust to variations in the number and placement of input markers, which are expected between databases. Differences between repeated experiments were minor but significant (p < 0.05). The algorithm expands the scope of available data and facilitates more comprehensive analyses of human-object interaction dynamics.
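
Phase (1), computing a hand coordinate system from optical markers, can be illustrated with a short sketch. The three-marker protocol assumed here (origin, x-direction, auxiliary in-plane point) and the Gram-Schmidt construction are illustrative assumptions, not the paper's exact marker layout.

```python
import numpy as np

def hand_frame_from_markers(p_origin, p_x, p_aux):
    """Build a right-handed coordinate frame from three optical markers
    and return it as a 4x4 homogeneous transform (sketch of OAA phase 1)."""
    p_origin, p_x, p_aux = (np.asarray(p, dtype=float)
                            for p in (p_origin, p_x, p_aux))
    x = p_x - p_origin
    x /= np.linalg.norm(x)
    # Gram-Schmidt: strip the x-component from the auxiliary direction,
    # then normalize to obtain an orthogonal y axis.
    v = p_aux - p_origin
    y = v - np.dot(v, x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)  # completes the right-handed triad
    T = np.eye(4)
    T[:3, :3] = np.column_stack([x, y, z])  # frame axes as columns
    T[:3, 3] = p_origin                     # frame origin
    return T
```

Virtual markers rigidly attached to the object can then be expressed in this frame, which is what makes the subsequent IK and ID phases possible.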


Comparative Analysis of Programming by Demonstration Methods: Kinesthetic Teaching vs Human Demonstration

Maric, Bruno, Zoric, Filip, Petric, Frano, Orsag, Matko

arXiv.org Artificial Intelligence

Programming by demonstration (PbD) is a simple and efficient way to program robots without explicit robot programming. PbD enables unskilled operators to easily demonstrate and guide different robots to execute tasks. In this paper we present a comparison of demonstration methods through a comprehensive user study. Each participant had to demonstrate drawing a simple pattern both by human demonstration using a virtual marker and by kinesthetic teaching with a robot manipulator. To evaluate the differences between the demonstration methods, we conducted a user study with 24 participants who filled out the NASA raw task load index (rTLX) and the system usability scale (SUS). We also evaluated the similarity of the executed trajectories to measure the difference between the demonstrated and ideal trajectories. We concluded the study with the finding that human demonstration using a virtual marker is on average 8 times faster, superior in terms of quality, and imposes 2 times less overall workload than kinesthetic teaching.
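
The abstract does not name the trajectory-similarity measure used in the study; one common choice is to resample both trajectories by arc length and take the mean point-wise distance, as in this hypothetical sketch.

```python
import numpy as np

def trajectory_error(demo, ideal, n=200):
    """Mean point-wise distance between a demonstrated and an ideal
    trajectory after arc-length resampling (an assumed metric, not
    necessarily the one used in the paper)."""
    def resample(traj):
        traj = np.asarray(traj, dtype=float)
        seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
        s_new = np.linspace(0.0, s[-1], n)
        return np.column_stack([np.interp(s_new, s, traj[:, d])
                                for d in range(traj.shape[1])])
    a, b = resample(demo), resample(ideal)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

Arc-length resampling makes the comparison invariant to execution speed, which matters when comparing a fast virtual-marker demonstration against a slower kinesthetic one.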